A novel simulation framework for modelling extracellular recordings in cortical tissue: implementation, validation and application to gamma oscillations in mammals
PhD Thesis

This thesis concerns the simulation of local field potentials (LFPs) from
cortical network activity; network gamma oscillations in particular. Alterations
in gamma oscillation measurements are observed in many brain
disorders. Understanding these measurements in terms of the underlying
neuronal activity is crucial for developing effective therapies. Modelling
can help to unravel the details of this relationship.
We first investigated a reduced compartmental neuron model for use in
network simulations. We showed that reduced models containing <10
compartments could reproduce the LFP characteristics of the equivalent
full-scale compartmental models to a reasonable degree of accuracy.
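For illustration, a minimal sketch of the standard point-source volume-conductor approximation commonly used to compute an extracellular potential from compartmental membrane currents; the function name, toy currents and the conductivity value below are illustrative assumptions, not taken from the thesis:

```python
# Standard point-source forward model:
# phi(r) = 1/(4*pi*sigma) * sum_n I_n / |r - r_n|
import numpy as np

def point_source_lfp(electrode_pos, comp_pos, comp_currents, sigma=0.3):
    """Extracellular potential (V) at one electrode position.

    electrode_pos : (3,) electrode coordinates in metres
    comp_pos      : (N, 3) compartment midpoint coordinates in metres
    comp_currents : (N,) transmembrane currents in amperes
    sigma         : extracellular conductivity in S/m (0.3 is a commonly assumed value)
    """
    dist = np.linalg.norm(comp_pos - electrode_pos, axis=1)
    dist = np.maximum(dist, 1e-6)          # avoid the singularity at zero distance
    return np.sum(comp_currents / dist) / (4.0 * np.pi * sigma)

# Toy usage: two compartments, one electrode 100 um away.
electrode = np.array([100e-6, 0.0, 0.0])
compartments = np.array([[0.0, 0.0, 0.0], [0.0, 50e-6, 0.0]])
currents = np.array([1e-9, -1e-9])        # membrane currents sum to ~0 over the cell
print(point_source_lfp(electrode, compartments, currents))
```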
Next, we created the Virtual Electrode Recording Tool for EXtracellular
Potentials (VERTEX): a Matlab tool for simulating LFPs in large,
spatially organised neuronal networks.
We used VERTEX to implement a large-scale neocortical slice model
exhibiting gamma frequency oscillations under bath kainate application,
an experimental preparation frequently used to investigate properties of
gamma oscillations. We built the model based on currently available
data on neocortical anatomy. By positioning a virtual electrode grid
to match Utah array placement in experiments in vitro, we could make
a meaningful direct comparison between simulated and experimentally
recorded LFPs.
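For illustration only, a small sketch of generating a square grid of virtual electrode positions comparable to a typical Utah array layout (assumed here to be 10 x 10 contacts at 400 um pitch; the exact geometry used in the thesis may differ):

```python
import numpy as np

def electrode_grid(n_rows=10, n_cols=10, pitch=400e-6, depth=0.0):
    """Return (n_rows * n_cols, 3) virtual electrode coordinates in metres."""
    xs, ys = np.meshgrid(np.arange(n_cols) * pitch, np.arange(n_rows) * pitch)
    zs = np.full(xs.size, depth)           # all contacts at one depth in this toy layout
    return np.column_stack([xs.ravel(), ys.ravel(), zs])

grid = electrode_grid()
print(grid.shape)   # (100, 3)
```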
We next investigated the spatial properties of the LFP in more detail,
using a smaller model of neocortical layer 2/3. We made several observations
about the spatial features of the LFP that shed light on past
experimental recordings: how gamma power and coherence decay away
from an oscillating region, how layer thickness affects the LFP, which
neurons contribute most to the LFP signal, and how the LFP power
scales with frequency at different model locations.
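As a sketch of the kind of spatial analysis summarised above, the following uses standard SciPy spectral estimators (not any VERTEX-specific routine) to compute gamma-band power and coherence relative to a reference channel; the 30-80 Hz band and the synthetic data are assumptions for illustration:

```python
import numpy as np
from scipy.signal import welch, coherence

def gamma_band_power(x, fs, band=(30.0, 80.0)):
    f, pxx = welch(x, fs=fs, nperseg=1024)
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].sum() * (f[1] - f[0])    # integrate the PSD over the band

def gamma_coherence(x, y, fs, band=(30.0, 80.0)):
    f, cxy = coherence(x, y, fs=fs, nperseg=1024)
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()

# Toy usage: synthetic "LFP" channels sharing a 40 Hz component that decays with channel index.
fs = 1000.0
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1.0 / fs)
common = np.sin(2 * np.pi * 40 * t)
lfp = np.stack([common / (i + 1) + 0.5 * rng.standard_normal(t.size) for i in range(5)])
ref = lfp[0]
for i, ch in enumerate(lfp):
    print(i, gamma_band_power(ch, fs), gamma_coherence(ref, ch, fs))
```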
Finally, we discuss the relevance of our simulation results to experimental
neuroscience. Our observations on the dominance of parvalbumin-expressing
basket interneuron synapses on the LFP are of particular relevance to epilepsy
and schizophrenia: changes in parvalbumin expression have been
observed in both disorders. We suggest how our results could inform
future experiments and aid in the interpretation of their results.
Sanity Checks for Saliency Metrics
Saliency maps are a popular approach to creating post-hoc explanations of
image classifier outputs. These methods produce estimates of the relevance of
each pixel to the classification output score, which can be displayed as a
saliency map that highlights important pixels. Despite a proliferation of such
methods, little effort has been made to quantify how good these saliency maps
are at capturing the true relevance of the pixels to the classifier output
(i.e. their "fidelity"). We therefore investigate existing metrics for
evaluating the fidelity of saliency methods (i.e. saliency metrics). We find
that there is little consistency in the literature in how such metrics are
calculated, and show that such inconsistencies can have a significant effect on
the measured fidelity. Further, we apply measures of reliability developed in
the psychometric testing literature to assess the consistency of saliency
metrics when applied to individual saliency maps. Our results show that
saliency metrics can be statistically unreliable and inconsistent, indicating
that comparative rankings between saliency methods generated using such metrics
can be untrustworthy.

Accepted for publication at the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20).
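For context, a hedged sketch of one common form of fidelity metric examined in this literature: a deletion-style test that removes the pixels a saliency map ranks as most relevant and tracks the drop in the classifier's score. The function and its parameters are illustrative and do not reproduce any specific metric from the paper:

```python
import numpy as np

def deletion_curve(model_score, image, saliency, n_steps=20, baseline=0.0):
    """model_score: callable mapping an image array to a scalar class score.
    Returns the score after each deletion step (a faster drop suggests higher fidelity)."""
    order = np.argsort(saliency.ravel())[::-1]         # most salient pixels first
    per_step = max(1, order.size // n_steps)
    img = image.copy().ravel()
    scores = [model_score(img.reshape(image.shape))]
    for step in range(n_steps):
        idx = order[step * per_step:(step + 1) * per_step]
        img[idx] = baseline                            # "delete" by replacing with a baseline value
        scores.append(model_score(img.reshape(image.shape)))
    return np.array(scores)

# Toy usage with a stand-in "classifier" that sums a fixed weight mask over the image.
weights = np.random.default_rng(0).random((8, 8))
score_fn = lambda x: float((x * weights).sum())
image = np.ones((8, 8))
saliency = weights                                     # a perfectly faithful map for this toy model
print(deletion_curve(score_fn, image, saliency))
```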
Reasoning and learning services for coalition situational understanding
Situational understanding requires an ability to assess the current situation and anticipate future situations, requiring both pattern recognition and inference. A coalition involves multiple agencies sharing information and analytics. This paper considers how to harness distributed information sources, including multimodal sensors, together with machine learning and reasoning services, to perform situational understanding in a coalition context. To exemplify the approach, we focus on a technology integration experiment in which multimodal data (including video and still imagery, geospatial and weather data) is processed and fused in a service-oriented architecture by heterogeneous pattern recognition and inference components. We show how the architecture: (i) provides awareness of the current situation and prediction of future states, (ii) is robust to individual service failure, (iii) supports the generation of ‘why’ explanations for human analysts (including from components based on ‘black box’ deep neural networks, which pose particular challenges to explanation generation), and (iv) allows for the imposition of information sharing constraints in a coalition context where there are varying levels of trust between partner agencies.
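Purely as an illustration of the robustness property described above (not the paper's architecture), a minimal sketch of a fusion step that queries several analytic services, tolerates individual failures, and records which sources contributed so that a ‘why’ explanation can cite them; all service names and values are placeholders:

```python
from statistics import mean

def imagery_cnn(obs):
    raise TimeoutError("service offline")   # stands in for an unavailable component

def fuse(services, observation):
    """services: dict name -> callable returning a confidence in [0, 1].
    Returns (fused confidence, list of services that actually contributed)."""
    estimates, contributors = [], []
    for name, service in services.items():
        try:
            estimates.append(service(observation))
            contributors.append(name)
        except Exception:
            continue                         # a failed service is skipped, not fatal
    if not estimates:
        raise RuntimeError("no service produced an estimate")
    return mean(estimates), contributors

# Toy usage: one service fails, fusion still produces an answer and an audit trail.
services = {
    "video_cnn": lambda obs: 0.8,
    "weather_rules": lambda obs: 0.6,
    "imagery_cnn": imagery_cnn,
}
confidence, used = fuse(services, {"location": "junction_4"})
print(confidence, used)                      # the explanation can cite the contributing services
```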
Stakeholders in explainable AI
There is general consensus that it is important for artificial
intelligence (AI) and machine learning systems to be explainable
and/or interpretable. However, there is no general
consensus over what is meant by ‘explainable’ and ‘interpretable’.
In this paper, we argue that this lack of consensus
is due to there being several distinct stakeholder communities.
We note that, while the concerns of the individual
communities are broadly compatible, they are not identical,
which gives rise to different intents and requirements for
explainability/interpretability. We use the software engineering
distinction between validation and verification, and the epistemological
distinctions between knowns/unknowns, to tease
apart the concerns of the stakeholder communities and highlight
the areas where their foci overlap or diverge. It is not
the purpose of the authors of this paper to ‘take sides’ — we
count ourselves as members, to varying degrees, of multiple
communities — but rather to help disambiguate what stakeholders
mean when they ask ‘Why?’ of an AI.
Integrating learning and reasoning services for explainable information fusion
We present a distributed information fusion system
able to integrate heterogeneous information processing services
based on machine learning and reasoning approaches. We focus
on higher (semantic) levels of information fusion, and highlight
the requirement for the component services, and the system as
a whole, to generate explanations of its outputs. Using a case
study approach in the domain of traffic monitoring, we introduce
component services based on (i) deep neural network approaches
and (ii) heuristic-based reasoning. We examine methods for
explanation generation in each case, including both transparency
(e.g., saliency maps, reasoning traces) and post-hoc methods
(e.g., explanation in terms of similar examples, identification of
relevant semantic objects). We consider trade-offs in terms of
the classification performance of the services and the kinds of
available explanations, and show how service integration offers
more robust performance and explainability.
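As an illustration of the ‘similar examples’ style of post-hoc explanation mentioned above, a minimal sketch that retrieves the nearest training items in an embedding space; the features, labels and distance choice are placeholders rather than the paper's actual services:

```python
import numpy as np

def nearest_examples(query_features, train_features, train_labels, k=3):
    """Return the indices and labels of the k training items closest to the query embedding."""
    dists = np.linalg.norm(train_features - query_features, axis=1)
    idx = np.argsort(dists)[:k]
    return idx, [train_labels[i] for i in idx]

# Toy usage: 100 stored training embeddings of dimension 16 with traffic-style labels.
rng = np.random.default_rng(0)
train_features = rng.standard_normal((100, 16))
train_labels = rng.choice(["car", "bus", "lorry"], size=100).tolist()
query = train_features[7] + 0.05 * rng.standard_normal(16)
idx, labels = nearest_examples(query, train_features, train_labels)
print(idx, labels)   # "this input resembles these known examples, which were labelled ..."
```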